The 2020 session 'Author fragmented MPEG-4 content with AVAssetWriter' refers to the 2011 session 'Working with media in AVFoundation'
(and by 'refers' I mean it says, 'I'm not going to cover this important part, just watch the 2011 video').
However, that session seems to have been deleted.
Frustrating...
#wwdc20-10011
I have a simple button style which animates the size of the button when it is pressed.
Unfortunately, this also causes the position of the button to animate, which I don't want.
Is there a way to limit the animation to the scale only? I have tried surrounding the offending modifier with .animation(nil) and .animation(.none), but that didn't work.
import SwiftUI

struct ExampleBackgroundStyle: ButtonStyle {
    func makeBody(configuration: Self.Configuration) -> some View {
        configuration.label
            .padding()
            .foregroundColor(.white)
            .background(Color.red)
            .cornerRadius(40)
            .padding(.horizontal, 20)
            .animation(nil)
            .scaleEffect(configuration.isPressed ? 0.6 : 1.0)
            .animation(.easeIn(duration: 0.3))
            .animation(nil)
    }
}
Also, when I show a button in a sheet, it animates in from the top left (0,0) to its position in the centre of the sheet after the sheet appears.
If I remove the animation on the scaleEffect, then this doesn't happen.
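Something like the value-scoped animation modifier might be what I'm after, but I haven't confirmed it avoids the sheet problem. A minimal sketch, assuming iOS 15 / macOS 12 or later where .animation(_:value:) is available:

import SwiftUI

// Sketch: tie the animation to changes of isPressed only, so unrelated
// layout changes (e.g. the sheet presenting) are not animated.
struct ScopedScaleButtonStyle: ButtonStyle {
    func makeBody(configuration: Self.Configuration) -> some View {
        configuration.label
            .padding()
            .foregroundColor(.white)
            .background(Color.red)
            .cornerRadius(40)
            .padding(.horizontal, 20)
            .scaleEffect(configuration.isPressed ? 0.6 : 1.0)
            // Animate only when the pressed state changes.
            .animation(.easeIn(duration: 0.3), value: configuration.isPressed)
    }
}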
I'm running a web server for a specific service on port 8900.
I'm using Telegraph to run the web server, so that opens and claims the port.
I also want to advertise the service over Bonjour, ideally with the correct port.
This is trivial with NetService, but that's deprecated, so I should probably move to the Network framework.
I can advertise without specifying a port:
listener = try NWListener(service: service, using: .tcp)
but then the service is advertised with port 61443.
I can advertise a specific port using
listener = try NWListener(using: .tcp, on: <myport>)
however, that fails in my use case because (unsurprisingly) the listener can't bind the port; my web server already has it.
Is this just a gap in the new API, or am I missing something?
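For context, the deprecated NetService route I mean is roughly this (the service type "_myservice._tcp." and the name are placeholders for my actual service):

import Foundation

// Deprecated, but it happily advertises a port that another process
// (Telegraph, in my case) already owns, without trying to bind it.
let service = NetService(domain: "local.",
                         type: "_myservice._tcp.",
                         name: "My Service",
                         port: 8900)
service.publish()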
I have an existing unlisted app on macOS. I'd like to launch a tvOS version.
However, I can't figure out how users would actually install the app.
Unlike macOS/iOS, there isn't a browser for them to open the install link...
Xcode has this 'special' habit of showing old errors on builds, sometimes from long ago.
Presumably there is a cache somewhere that isn't being invalidated.
Cleaning the build folder doesn't fix this. Has anyone found a way that does?
I'm trying to run a Core ML model.
This is an image classifier generated using:
let parameters = MLImageClassifier.ModelParameters(
    validation: .dataSource(validationDataSource),
    maxIterations: 25,
    augmentation: [],
    algorithm: .transferLearning(
        featureExtractor: .scenePrint(revision: 2),
        classifier: .logisticRegressor
    )
)

let model = try MLImageClassifier(trainingData: .labeledDirectories(at: trainingDir.url),
                                  parameters: parameters)
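The modelUrl used below points at the exported, compiled classifier. A rough sketch of that step (the path is a placeholder, not my actual setup):

import CoreML

// Assumption: export the trained classifier, then compile it,
// because MLModel(contentsOf:) expects a compiled model.
let exportUrl = URL(fileURLWithPath: "/path/to/ImageClassifier.mlmodel")  // placeholder path
try model.write(to: exportUrl)
let modelUrl = try await MLModel.compileModel(at: exportUrl)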
I'm trying to run it with the new async Vision API:
import AppKit
import CoreML
import Vision

let model = try MLModel(contentsOf: modelUrl)
guard let modelContainer = try? CoreMLModelContainer(model: model) else {
    fatalError("The model is missing")
}
let request = CoreMLRequest(model: modelContainer)

let image = NSImage(named: "testImage")!
let cgImage = image.toCGImage()!   // toCGImage() is my own NSImage helper

let handler = ImageRequestHandler(cgImage)
do {
    let results = try await handler.perform(request)
    print(results)
} catch {
    print("Failed: \(error)")
}
This gives me:
Failed: internalError("Error Domain=com.apple.Vision Code=7 "The VNDetectorProcessOption_ScenePrints required option was not found" UserInfo={NSLocalizedDescription=The VNDetectorProcessOption_ScenePrints required option was not found}")
Please help! Am I missing something?
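For comparison, the equivalent request through the older Vision API would look roughly like this (sketch only; I haven't verified whether it behaves differently with this model):

import Vision

// Older VNCoreMLRequest path, for comparison with the async API above.
let vnModel = try VNCoreMLModel(for: model)
let vnRequest = VNCoreMLRequest(model: vnModel)
let vnHandler = VNImageRequestHandler(cgImage: cgImage, options: [:])
try vnHandler.perform([vnRequest])
if let observations = vnRequest.results as? [VNClassificationObservation] {
    print(observations.map { "\($0.identifier): \($0.confidence)" })
}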
Topic: Machine Learning & AI
SubTopic: Core ML
I'm hitting a limit when trying to train an Image Classifier.
It fails at about 16k images (in line with the error info) and gives the error:
IOSurface creation failed: e00002be parentID: 00000000 properties: {
IOSurfaceAllocSize = 529984;
IOSurfaceBytesPerElement = 4;
IOSurfaceBytesPerRow = 1472;
IOSurfaceElementHeight = 1;
IOSurfaceElementWidth = 1;
IOSurfaceHeight = 360;
IOSurfaceName = CoreVideo;
IOSurfaceOffset = 0;
IOSurfacePixelFormat = 1111970369;
IOSurfacePlaneComponentBitDepths = (
8,
8,
8,
8
);
IOSurfacePlaneComponentNames = (
4,
3,
2,
1
);
IOSurfacePlaneComponentRanges = (
1,
1,
1,
1
);
IOSurfacePurgeWhenNotInUse = 1;
IOSurfaceSubsampling = 1;
IOSurfaceWidth = 360;
} (likely per client IOSurface limit of 16384 reached)
I feel like I was able to use more images than this before upgrading to Sonoma, but I don't have the receipts...
Is there a way around this?
I have oodles of spare memory on my machine; it's using about 16 GB of 64 when it crashes...
The code to create the model is:
let parameters = MLImageClassifier.ModelParameters(
    validation: .dataSource(validationDataSource),
    maxIterations: 25,
    augmentation: [],
    algorithm: .transferLearning(
        featureExtractor: .scenePrint(revision: 2),
        classifier: .logisticRegressor
    )
)

let model = try MLImageClassifier(trainingData: .labeledDirectories(at: trainingDir.url),
                                  parameters: parameters)
I have also tried the same training source in the Create ML app; it runs through 'Extracting features' and crashes at about 16k images processed.
Thank you
Topic: Machine Learning & AI
SubTopic: Core ML